

Search for: All records

Creators/Authors contains: "Cao Son, Tran"


  1. Logic programs (LPs) and argumentation frameworks (AFs) are two declarative knowledge representation (KR) formalisms used for different reasoning tasks. The purpose of this study is to interlink these two reasoning components. To this end, we introduce two frameworks: LPAF and AFLP. The former enables the result of argumentation in an AF to be used for reasoning in an LP, while the latter enables the result of reasoning in an LP to be used for arguing in an AF. These frameworks are extended to bidirectional frameworks in which the AF and the LP can exchange information with each other. We also investigate their connection to several general KR frameworks from the literature.
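As a rough illustration of the LPAF direction described in entry 1 (the AFLP direction is symmetric), the sketch below computes the grounded extension of a small argumentation framework and then feeds the accepted arguments into a toy logic program as facts. The argument names, the acc/1 predicate, and the hard-coded rule are hypothetical; the paper's frameworks are defined far more generally than this.

```python
def grounded_extension(arguments, attacks):
    """Least fixpoint of the characteristic function F(S) of the AF."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    s = set()
    while True:
        # an argument is defended by s if each of its attackers is attacked by s
        defended = {a for a in arguments
                    if all(any((d, b) in attacks for d in s) for b in attackers[a])}
        if defended == s:
            return s
        s = defended

args = {"a", "b", "c"}
atts = {("a", "b"), ("b", "c")}            # a attacks b, b attacks c
accepted = grounded_extension(args, atts)  # -> {"a", "c"}

# Pass the accepted arguments to a tiny logic program as acc/1 facts.
facts = {f"acc({x})" for x in accepted}
# Hypothetical rule: conclusion :- acc(a), acc(c).
if {"acc(a)", "acc(c)"} <= facts:
    facts.add("conclusion")
print(sorted(facts))                       # ['acc(a)', 'acc(c)', 'conclusion']
```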
  2. The action language m∗ employs the notion of update models in defining transitions between states. Given an action occurrence and a state, the update model of the action occurrence is automatically constructed from the given state and the observability of agents. A main criticism of this approach is that it cannot deal with situations in which agents have incorrect beliefs about the observability of other agents. The present paper addresses this shortcoming by defining a new semantics for m∗. The new semantics addresses the aforementioned problem of m∗ while maintaining the simplicity of its semantics; the new definitions continue to employ simple update models, with at most three events for all types of actions, which can be constructed from the action specification and independently of the state in which the action occurs.
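For background on entry 2: update models come from dynamic epistemic logic, where the successor epistemic state is obtained by the standard product-update construction sketched below. This is the general textbook definition in the usual notation, not the paper's exact semantics; m∗'s update models additionally carry substitutions for ontic effects, and the new semantics changes how edges are derived from the agents' (possibly incorrect) beliefs about observability.

```latex
% Product update of a Kripke model M = (W, {R_i}, V) with an
% update model U = (E, {Q_i}, pre):
\begin{align*}
  M \otimes U &= (W', \{R'_i\}_{i \in A}, V') \quad\text{where}\\
  W' &= \{\, (w,e) \in W \times E \mid (M,w) \models \mathit{pre}(e) \,\},\\
  (w,e)\, R'_i\, (v,f) &\iff w\, R_i\, v \ \text{ and }\ e\, Q_i\, f,\\
  V'((w,e)) &= V(w).
\end{align*}
```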
  3. In this paper, we present a system, called xASP, for generating explanations of why an atom does or does not belong to an answer set of a given program. The system can generate all possible explanations for a query without having to simplify the program before computing explanations, i.e., it works with non-ground programs. These properties distinguish xASP from existing systems such as xClingo, DiscASP, exp(ASPc), and s(CASP), which also generate explanations for queries to logic programs under the answer set semantics but either simplify and ground the programs (xClingo, DiscASP, and exp(ASPc)) or do not always generate all possible explanations (s(CASP)). In addition, the output of xASP is insensitive to syntactic variations such as the order of conditions and the order of rules, which also distinguishes it from the output of s(CASP).
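To make the question in entry 3 concrete, the toy sketch below "explains" why an atom belongs to a given answer set by exhibiting a rule whose body is satisfied, recursively. It only illustrates what such an explanation contains; it is not xASP's algorithm, input format, or output, and it sidesteps grounding and the subtleties of negation that the actual systems handle.

```python
# A program as a list of rules: (head, positive_body, negative_body).
program = [
    ("p", ["q"], ["r"]),   # p :- q, not r.
    ("q", [], []),         # q.
]
answer_set = {"p", "q"}    # the unique answer set of this program

def explain(atom, seen=frozenset()):
    """Return one derivation of `atom` w.r.t. the answer set, or None."""
    if atom in seen:                      # avoid cyclic justifications
        return None
    for head, pos, neg in program:
        if head != atom:
            continue
        if all(b in answer_set for b in pos) and all(b not in answer_set for b in neg):
            sub = [explain(b, seen | {atom}) for b in pos]
            if all(s is not None for s in sub):
                return {"atom": atom, "rule": (head, pos, neg), "because": sub}
    return None

print(explain("p"))
# {'atom': 'p', 'rule': ('p', ['q'], ['r']),
#  'because': [{'atom': 'q', 'rule': ('q', [], []), 'because': []}]}
```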
  4. In this paper, we develop a state transition function for partially observable multi-agent epistemic domains and implement it using Answer Set Programming (ASP). The transition function computes the next state upon the occurrence of a single action, so it can be used as a module in epistemic planners. Our transition function incorporates ontic, sensing, and announcement actions and allows for arbitrarily nested belief formulae and general common knowledge. A novel feature of our model is that, upon an action occurrence, an observing agent corrects its (possibly wrong) initial beliefs about the action's precondition and its own observability. Through examples, we show that this step is necessary for robust state transitions. We establish properties of our state transition function regarding its soundness in updating agents' beliefs consistently with their observability.
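The belief-correction step highlighted in entry 4 can be illustrated with the toy transition below, in which an observing agent that wrongly disbelieved the action's precondition revises that belief once it sees the action succeed, while an oblivious agent would keep its old beliefs. The literal-set representation, the transition function, and the door/key scenario are hypothetical simplifications only; the paper works with full epistemic states and implements its transition function in ASP.

```python
def transition(beliefs, precondition, effect, observers):
    """beliefs: {agent: set of literals}, where literals are 'f' or '-f'."""
    new_beliefs = {}
    for agent, bs in beliefs.items():
        bs = set(bs)
        if agent in observers:
            # Seeing the action succeed tells the agent the precondition held,
            # so a contradicting prior belief is corrected ...
            bs.discard("-" + precondition)
            bs.add(precondition)
            # ... and the agent learns the effect.
            bs.discard("-" + effect)
            bs.add(effect)
        new_beliefs[agent] = bs
    return new_beliefs

# Agent B wrongly believes A has no key; A opens the door while both observe.
before = {"A": {"has_key"}, "B": {"-has_key"}}
after = transition(before, precondition="has_key", effect="door_open",
                   observers={"A", "B"})
print(sorted(after["B"]))   # ['door_open', 'has_key']  (wrong belief corrected)
```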
  5. In human-aware planning systems, a planning agent may need to explain its plan to a human user when that plan appears to be infeasible or sub-optimal. A popular approach, called model reconciliation, has been proposed as a way to bring the model of the human user closer to the agent's model. To do so, the agent provides an explanation that can be used to update the human's model such that the agent's plan becomes feasible or optimal with respect to that updated model. Existing approaches to this problem have been based on automated planning methods and have been limited to classical planning problems. In this paper, we approach the model reconciliation problem from a different perspective, that of knowledge representation and reasoning, and demonstrate that our approach applies not only to classical planning problems but also to hybrid systems planning problems with durative actions and events/processes. In particular, we propose a logic-based framework for explanation generation where, given a knowledge base KBa (of an agent) and a knowledge base KBh (of a human user), each encoding its owner's knowledge of a planning problem, and given that KBa entails a query q (e.g., that a proposed plan of the agent is valid), the goal is to identify an explanation ε ⊆ KBa such that, when it is used to update KBh, the updated KBh also entails q. More specifically, we make the following contributions: (1) we formally define the notion of logic-based explanations in the context of model reconciliation problems; (2) we introduce a number of cost functions that can be used to reflect preferences between explanations; (3) we present algorithms to compute explanations for both classical planning and hybrid systems planning problems; and (4) we empirically evaluate their performance on such problems. Our empirical results demonstrate that, on classical planning problems, our approach is faster than the state of the art when the explanations are long or when the size of the knowledge base is small (e.g., when the plans to be explained are short). They also demonstrate that our approach is efficient for hybrid systems planning problems. Finally, we evaluate the real-world efficacy of explanations generated by our algorithms through a controlled human user study, in which we develop a proof-of-concept visualization system and use it as a medium for explanation communication.
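A minimal propositional sketch of the explanation problem defined in entry 5: find a smallest ε ⊆ KBa such that KBh updated with ε entails the query q. The definite-clause encoding, the truth-table entailment check, and the brute-force subset search below are illustrative assumptions only; the paper's algorithms and cost functions for planning problems are considerably more involved.

```python
from itertools import combinations, product

def entails(kb, query, atoms):
    """Classical entailment by enumerating all truth assignments."""
    def holds(clause, model):
        body, head = clause                # definite clause: body (atom set) -> head
        return head in model or not body <= model
    for bits in product([False, True], repeat=len(atoms)):
        model = {a for a, b in zip(atoms, bits) if b}
        if all(holds(c, model) for c in kb) and query not in model:
            return False                   # found a countermodel
    return True

def explanation(kb_a, kb_h, query, atoms):
    """Smallest subset of KB_a whose addition makes KB_h entail the query."""
    for k in range(len(kb_a) + 1):
        for eps in combinations(kb_a, k):
            if entails(kb_h | set(eps), query, atoms):
                return set(eps)
    return None

# Clauses are (frozenset(body_atoms), head_atom); facts have empty bodies.
kb_a = {(frozenset(), "a"), (frozenset({"a"}), "plan_valid")}
kb_h = {(frozenset({"a", "b"}), "plan_valid")}   # the human's model lacks both pieces
atoms = ["a", "b", "plan_valid"]

eps = explanation(kb_a, kb_h, "plan_valid", atoms)
print(eps == kb_a)   # True: here the whole agent KB is needed as the explanation
```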